Ultra-Lightweight Face Animation Method for Ultra-Low Bitrate Video Conferencing
LU Jianguo, ZHENG Qingfang
ZTE Communications, 2023, 21(1): 64-71. DOI: 10.12142/ZTECOM.202301008

Video conferencing systems face a dilemma between smooth streaming and decent visual quality because traditional video compression algorithms fail to produce bitstreams low enough for bandwidth-constrained networks. An ultra-lightweight face-animation-based method that enables a better video conferencing experience is proposed in this paper. The proposed method compresses high-quality upper-body videos at ultra-low bitrates and runs efficiently on mobile devices without high-end graphics processing units (GPUs). Moreover, a visual quality evaluation algorithm is used to avoid image degradation caused by extreme face poses and/or expressions, and a full-resolution image composition algorithm is applied to reduce unnaturalness, which guarantees the user experience. Experiments show that the proposed method is efficient and can generate high-quality videos at ultra-low bitrates.
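The abstract gives no implementation details, but the general shape of a keypoint-driven, face-animation-based codec can be sketched as follows: the sender transmits one reference frame plus compact per-frame facial keypoints, the receiver animates the reference frame from those keypoints, and a quality gate falls back to sending a fresh frame when animation would degrade. This is a minimal illustrative sketch only; KeypointExtractor, FaceGenerator, quality_score, and the threshold are hypothetical placeholders, not the authors' models or API.

```python
# Hypothetical sketch of a face-animation-based codec for video conferencing.
# All classes and thresholds below are illustrative stand-ins.

import numpy as np

class KeypointExtractor:
    """Stand-in for a lightweight facial keypoint network."""
    def __call__(self, frame: np.ndarray) -> np.ndarray:
        return np.zeros((10, 2), dtype=np.float32)  # e.g. 10 (x, y) keypoints

class FaceGenerator:
    """Stand-in for the animation network that warps the reference frame."""
    def __call__(self, reference: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
        return reference

def quality_score(keypoints: np.ndarray) -> float:
    """Stand-in for visual quality evaluation (e.g. penalizing extreme poses)."""
    return 1.0

def encode(frames, extractor, quality_threshold=0.5):
    """Send full reference frames only when animation quality would degrade."""
    reference, payload = frames[0], []
    for frame in frames[1:]:
        kp = extractor(frame)
        if quality_score(kp) < quality_threshold:
            reference = frame                      # refresh the reference frame
            payload.append(("frame", frame))
        else:
            payload.append(("keypoints", kp))      # only a few hundred bytes per frame
    return frames[0], payload

def decode(first_reference, payload, generator):
    """Reconstruct the video from the reference frame(s) and the keypoint stream."""
    reference, output = first_reference, [first_reference]
    for kind, data in payload:
        if kind == "frame":
            reference = data
            output.append(data)
        else:
            output.append(generator(reference, data))
    return output
```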

Integrating Coarse Granularity Part-Level Features with Supervised Global-Level Features for Person Re-Identification
CAO Jiahao, MAO Xiaofei, LI Dongfang, ZHENG Qingfang, JIA Xia
ZTE Communications, 2021, 19(1): 72-81. DOI: 10.12142/ZTECOM.202101009

Person re-identification (Re-ID) has achieved great progress in recent years. However, person Re-ID methods still suffer from missing body parts and occlusion, which make the learned representations less reliable. In this paper, we propose a robust coarse granularity part-level network (CGPN) for person Re-ID, which extracts robust regional features and integrates supervised global features for pedestrian images. CGPN gains a two-fold benefit toward higher accuracy for person Re-ID. On one hand, CGPN learns to extract effective regional features for pedestrian images. On the other hand, compared with extracting global features directly from the backbone network, CGPN learns to extract more accurate global features with a supervision strategy. The single model trained on three Re-ID datasets achieves state-of-the-art performance. Especially on CUHK03, the most challenging Re-ID dataset, we obtain a top result of Rank-1/mean average precision (mAP) = 87.1%/83.6% without re-ranking.
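To make the part-level idea concrete, the sketch below shows a common way such architectures are built: a backbone feature map is split into a few coarse horizontal stripes, each pooled and embedded separately, alongside a supervised global branch; identity classifiers supervise every branch during training. This illustrates the general part-based design under stated assumptions only; the class name, stripe count, and layer sizes are hypothetical and not the CGPN implementation.

```python
# Hypothetical sketch of coarse-granularity part-level + supervised global features.
# Not the CGPN code; sizes and naming are illustrative.

import torch
import torch.nn as nn
import torchvision

class PartLevelReID(nn.Module):
    def __init__(self, num_parts: int = 3, embed_dim: int = 256, num_ids: int = 751):
        super().__init__()
        backbone = torchvision.models.resnet50()
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # B x 2048 x H x W
        self.num_parts = num_parts
        self.pool = nn.AdaptiveAvgPool2d((1, 1))
        self.part_embeds = nn.ModuleList(
            [nn.Linear(2048, embed_dim) for _ in range(num_parts)]
        )
        self.global_embed = nn.Linear(2048, embed_dim)
        # Identity classifiers supervise both the global branch and each part branch.
        self.classifiers = nn.ModuleList(
            [nn.Linear(embed_dim, num_ids) for _ in range(num_parts + 1)]
        )

    def forward(self, x):
        fmap = self.features(x)                              # B x 2048 x H x W
        global_feat = self.global_embed(self.pool(fmap).flatten(1))
        stripes = torch.chunk(fmap, self.num_parts, dim=2)   # coarse horizontal stripes
        part_feats = [
            embed(self.pool(s).flatten(1))
            for s, embed in zip(stripes, self.part_embeds)
        ]
        feats = [global_feat] + part_feats
        logits = [clf(f) for clf, f in zip(self.classifiers, feats)]
        return feats, logits  # feats are typically concatenated at test time for matching
```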

Face Detection, Alignment, Quality Assessment and Attribute Analysis with Multi-Task Hybrid Convolutional Neural Networks
GUO Da, ZHENG Qingfang, PENG Xiaojiang, LIU Ming
ZTE Communications, 2019, 17(3): 15-22. DOI: 10.12142/ZTECOM.201903004

This paper proposes a universal framework, termed Multi-Task Hybrid Convolutional Neural Network (MHCNN), for joint face detection, facial landmark detection, facial quality assessment, and facial attribute analysis. MHCNN consists of a high-accuracy single stage detector (SSD) and an efficient tiny convolutional neural network (T-CNN) for joint face detection refinement, alignment, and attribute analysis. Although the SSD face detector achieves promising results, we find that applying a tiny CNN to its detections further improves the detected face scores and bounding boxes. Through multi-task training, our T-CNN provides five facial landmarks, facial quality scores, and facial attributes such as wearing sunglasses or wearing masks. Since no public facial quality or facial attribute datasets meet our needs, we contribute two datasets, FaceQ and FaceA, collected from the Internet. Experiments show that MHCNN achieves face detection performance comparable to the state of the art on the Face Detection Data Set and Benchmark (FDDB), and obtains reasonable results on AFLW, FaceQ, and FaceA.
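A multi-task head of the kind described can be sketched as a shared tiny trunk over a face crop with separate output heads for the five landmarks, the quality score, and binary attributes, trained with a weighted sum of per-task losses. The layer sizes, loss choices, and weights below are illustrative assumptions, not the paper's T-CNN architecture or training recipe.

```python
# Hypothetical sketch of a tiny multi-task CNN: shared trunk, three task heads.
# Sizes and loss weights are illustrative only.

import torch
import torch.nn as nn

class TinyMultiTaskCNN(nn.Module):
    def __init__(self, num_attributes: int = 2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.landmarks = nn.Linear(64, 10)                 # 5 (x, y) facial landmarks
        self.quality = nn.Linear(64, 1)                    # facial quality score
        self.attributes = nn.Linear(64, num_attributes)    # e.g. sunglasses, mask

    def forward(self, face_crop):
        feat = self.trunk(face_crop)
        return self.landmarks(feat), self.quality(feat), self.attributes(feat)

def multi_task_loss(preds, targets, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three task losses (weights are illustrative)."""
    lm_pred, q_pred, attr_pred = preds
    lm_gt, q_gt, attr_gt = targets
    return (weights[0] * nn.functional.mse_loss(lm_pred, lm_gt)
            + weights[1] * nn.functional.mse_loss(q_pred, q_gt)
            + weights[2] * nn.functional.binary_cross_entropy_with_logits(attr_pred, attr_gt))
```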
